03. Designing Loss Functions


Objective Loss Functions

An objective function is typically a loss function that you seek to minimize (or, in some cases, maximize) while training a neural network. It is often expressed as a function that measures the difference between a prediction y_hat and a true target y:

\mathcal{L} (y, \hat{y})
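As a simple illustration of this idea, here is a minimal sketch of one common loss function, mean squared error, written in PyTorch (the tensors are made-up values for this sketch, not part of the course material):

```python
import torch

# Minimal sketch: a loss L(y, y_hat) maps a target and a prediction
# to a single scalar that training tries to minimize.
def mse_loss(y, y_hat):
    # Mean squared error: average of (y - y_hat)^2 over all elements
    return ((y - y_hat) ** 2).mean()

# Made-up targets and predictions, just for illustration
y = torch.tensor([1.0, 0.0, 1.0])
y_hat = torch.tensor([0.8, 0.1, 0.6])

print(mse_loss(y, y_hat).item())  # a single number; lower is better
```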

The objective function we've used most in this program is cross entropy loss, which is a negative log likelihood loss applied to the output of a softmax layer. For a binary classification problem, such as deciding whether an image is real or fake, we can calculate the binary cross entropy loss as:

-[y\log(\hat{y}) +(1-y) \log (1-\hat{y})]

In other words, a sum of two log losses: the first term is active when the true label is y = 1, and the second when y = 0. A concrete sketch follows below.
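To make the formula concrete, here is a minimal sketch that computes binary cross entropy both by hand and with PyTorch's built-in `nn.BCELoss` (this assumes a PyTorch environment; the probabilities and labels are made up for illustration):

```python
import torch
import torch.nn as nn

# Made-up predicted probabilities (e.g., discriminator outputs after a
# sigmoid) and true targets: 1 = real, 0 = fake
y_hat = torch.tensor([0.9, 0.2, 0.7])
y = torch.tensor([1.0, 0.0, 1.0])

# Manual computation of -[y*log(y_hat) + (1 - y)*log(1 - y_hat)],
# averaged over the batch
manual_bce = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()

# PyTorch's built-in binary cross entropy (mean reduction by default)
criterion = nn.BCELoss()
builtin_bce = criterion(y_hat, y)

print(manual_bce.item(), builtin_bce.item())  # the two values match
```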


In the notation in the next video, you'll see that y_hat is the output of the discriminator: the predicted probability that an input image is real.